
    Combining remote sensing and ground census data to develop new maps of the distribution of rice agriculture in China

    Large-scale assessments of the potential for food production and its impact on biogeochemical cycling require the best possible information on the distribution of cropland. This information can come from ground-based agricultural census data sets and/or spaceborne remote sensing products, each with its own strengths and weaknesses. Official cropland statistics for China contain much information on the distribution of crop types, but are known to significantly underestimate total cropland areas and are generally at coarse spatial resolution. Remote sensing products can provide moderate to fine spatial resolution estimates of cropland location and extent, but supply little information on crop type or management. We combined county-scale agricultural census statistics on total cropland area and sown area of 17 major crops in 1990 with a fine-resolution land-cover map derived from 1995–1996 optical remote sensing (Landsat) data to generate 0.5° resolution maps of the distribution of rice agriculture in mainland China. Agricultural census data were used to determine the fraction of crop area in each 0.5° grid cell that was in single rice and each of 10 different multicrop paddy rice rotations (e.g., winter wheat/rice), while the remote sensing land-cover product was used to determine the spatial distribution and extent of total cropland in China. We estimate that there were 0.30 million km2 of paddy rice cropland; 75% of this paddy land was multicropped, and 56% had two rice plantings per year. Total sown area for paddy rice was 0.47 million km2. Paddy rice agriculture occurred on 23% of all cultivated land in China.
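    The combination step described above is essentially a per-grid-cell multiplication of remote-sensing cropland extent by census-derived rotation fractions. A minimal Python sketch of that idea follows; the grid size, fractions and variable names are placeholder assumptions for illustration, not values or code from the paper.

```python
import numpy as np

# Hypothetical illustration of combining a remote-sensing cropland map with
# census-derived rotation fractions on a 0.5-degree grid (all values are placeholders).
n_lat, n_lon = 60, 120                               # illustrative grid over mainland China
cropland_km2 = np.random.rand(n_lat, n_lon) * 500    # cropland area per cell (remote sensing)

# Census-derived fraction of cropland in each cell under three example rotations.
frac = {
    "single_rice": np.full((n_lat, n_lon), 0.10),
    "winter_wheat/rice": np.full((n_lat, n_lon), 0.08),
    "rice/rice": np.full((n_lat, n_lon), 0.12),
}

# Paddy area per rotation = cropland extent x census fraction.
paddy_km2 = {name: cropland_km2 * f for name, f in frac.items()}

# Sown rice area counts double-rice rotations twice (two plantings per year).
sown_rice_km2 = (paddy_km2["single_rice"]
                 + paddy_km2["winter_wheat/rice"]
                 + 2.0 * paddy_km2["rice/rice"])

print("total paddy cropland (km2):", sum(a.sum() for a in paddy_km2.values()))
print("total sown rice area (km2):", sown_rice_km2.sum())
```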

    Greenhouse gas emissions from croplands of China

    China possesses 1.33 million km2 of cropland. Cultivation of this cropland has not only altered the biogeochemical cycles of carbon (C) and nitrogen (N) in agroecosystems but has also affected the global climate. The impacts of agroecosystems on global climate are attributable to emissions of three greenhouse gases: carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O).

    Dealloyed porous gold anchored by in situ generated graphene sheets as high-activity catalyst for methanol electro-oxidation reaction

    A novel one-step method to prepare nanocomposites of reduced graphene oxide (RGO)/nanoporous gold (NPG) is realized by chemically dealloying an Al2Au precursor. The RGO nanosheets anchored on the surface of the NPG have a cicada-wing-like shape and act as both a conductive agent and a buffer layer, improving the catalytic activity of the NPG for the methanol electro-oxidation reaction (MOR). This improvement can also be ascribed to the microstructural change of the NPG during dealloying in the presence of RGO. This work provides a facile and economical method to prepare NPG-based catalysts for the MOR.

    Patient-specific approach using data fusion and adversarial training for epileptic seizure prediction

    Epilepsy is the second most common neurological disorder after headache, so accurate and reliable prediction of seizures is of great clinical value. Most epileptic seizure prediction methods consider only the EEG signal, or extract and classify the features of EEG and ECG signals separately, so the potential of multimodal data to improve prediction performance is not fully exploited. In addition, epilepsy data are time-varying, with differences between episodes in the same patient, making it difficult for traditional curve-fitting models to achieve high accuracy and reliability. To improve the accuracy and reliability of the prediction system, we propose a novel personalized approach based on data fusion and domain adversarial training to predict epileptic seizures. Evaluated with leave-one-out cross-validation, it achieves an average accuracy, sensitivity and specificity of 99.70%, 99.76% and 99.61%, respectively, with an average false alarm rate (FAR) of 0.001. Finally, the advantage of this approach is demonstrated by comparison with recent related literature. This method will be incorporated into clinical practice to provide personalized reference information for epileptic seizure prediction.
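    The abstract does not detail the network, so the sketch below only illustrates the generic domain-adversarial training idea it builds on: fused EEG/ECG feature vectors feed a seizure classifier and, through a gradient-reversal layer, a domain head. Layer sizes, the concatenation-based fusion and all names are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales and reverses gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SeizurePredictor(nn.Module):
    def __init__(self, eeg_dim=64, ecg_dim=8, hidden=128, n_domains=5):
        super().__init__()
        # Simplified fusion: concatenate pre-extracted EEG and ECG feature vectors.
        self.encoder = nn.Sequential(
            nn.Linear(eeg_dim + ecg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden, 2)            # preictal vs. interictal
        self.domain_head = nn.Linear(hidden, n_domains)   # e.g., one domain per recording session

    def forward(self, eeg, ecg, lambd=1.0):
        z = self.encoder(torch.cat([eeg, ecg], dim=1))
        return self.classifier(z), self.domain_head(GradReverse.apply(z, lambd))

# One illustrative training step on random data.
model = SeizurePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
eeg, ecg = torch.randn(32, 64), torch.randn(32, 8)
labels, domains = torch.randint(0, 2, (32,)), torch.randint(0, 5, (32,))
y, d = model(eeg, ecg, lambd=0.5)
loss = nn.functional.cross_entropy(y, labels) + nn.functional.cross_entropy(d, domains)
opt.zero_grad(); loss.backward(); opt.step()
```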

    Survey on Video Object Tracking Algorithms

    Video object tracking is an important research topic in computer vision, concerned with tracking objects of interest in video streams or image sequences. It has been widely used in video surveillance, autonomous driving, precision guidance and other fields, so a comprehensive review of video object tracking algorithms is of great significance. Firstly, according to their different sources, the challenges faced by video object tracking are classified into two aspects, factors of the objects themselves and factors of the background, and each is summarized. Secondly, typical video object tracking algorithms of recent years are classified into correlation-filtering algorithms and deep-learning algorithms. The correlation-filtering trackers are further divided into three categories: kernelized correlation filtering, scale-adaptive correlation filtering and multi-feature-fusion correlation filtering. The deep-learning trackers are divided into two categories: those based on Siamese networks and those based on convolutional neural networks. This paper analyzes the various algorithms in terms of research motivation, algorithmic ideas, advantages and disadvantages. Then, the widely used datasets and evaluation metrics are introduced. Finally, the paper summarizes the research and looks forward to future development trends of video object tracking.
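    As background for the correlation-filtering family surveyed above, the following is a textbook-style, MOSSE-like filter sketch in Python; it is not taken from any of the surveyed papers, and the patch size and parameters are arbitrary.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-3):
    """Learn a MOSSE-style correlation filter from one template patch.

    The desired response is a Gaussian peak centred on the object; the filter
    is solved in closed form in the Fourier domain: H* = G.conj(F) / (F.conj(F) + lam).
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def locate(H_conj, search_patch):
    """Correlate the filter with a new patch; the response peak gives the object shift."""
    response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(search_patch)))
    return np.unravel_index(np.argmax(response), response.shape)

# Illustrative use on a synthetic 64x64 grayscale patch shifted by (3, 5) pixels.
template = np.random.rand(64, 64)
H = train_filter(template)
print("response peak:", locate(H, np.roll(template, (3, 5), axis=(0, 1))))
```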

    Pixel-level intra-domain adaptation for semantic segmentation

    Recent advances in unsupervised domain adaptation have achieved remarkable performance on semantic segmentation tasks. Despite such progress, existing works mainly focus on bridging the inter-domain gap between the source and target domains, while only a few of them address the intra-domain gaps within the target data. In this work, we propose a pixel-level intra-domain adaptation approach to reduce the intra-domain gaps within the target data. Compared with image-level methods, ours treats each pixel as an instance, adapting the segmentation model at a finer granularity. Specifically, we first conduct inter-domain adaptation between the source and target domains; then we separate the pixels in target images into easy and hard subdomains; finally, we propose a pixel-level adversarial training strategy to adapt the segmentation network from the easy to the hard subdomain. Moreover, we show that segmentation accuracy can be further improved by incorporating a continuous indexing technique into the adversarial training. Experimental results show the effectiveness of our method against existing state-of-the-art approaches.
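    The pipeline above is described only at a high level, so the sketch below shows one plausible form of the final pixel-level adversarial step: a toy segmentation network and a per-pixel discriminator that separates predictions from the easy and hard subdomains. The architectures, losses and the easy/hard split are placeholder assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNet(nn.Module):
    """Toy fully convolutional segmentation network (placeholder architecture)."""
    def __init__(self, n_classes=19):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )

    def forward(self, x):
        return self.body(x)

class PixelDiscriminator(nn.Module):
    """Predicts, per pixel, whether a softmax map comes from the easy (1) or hard (0) subdomain."""
    def __init__(self, n_classes=19):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_classes, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, p):
        return self.body(p)   # per-pixel logits

seg, disc = SegNet(), PixelDiscriminator()
opt_seg = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

# Placeholder target-domain batches, assumed already split into easy/hard subdomains.
easy_img, hard_img = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)

# 1) Adversarial step for the segmentation network: make hard-subdomain pixel
#    predictions indistinguishable from easy-subdomain ones.
logits_hard = disc(F.softmax(seg(hard_img), dim=1))
adv_loss = F.binary_cross_entropy_with_logits(logits_hard, torch.ones_like(logits_hard))
opt_seg.zero_grad(); adv_loss.backward(); opt_seg.step()

# 2) Discriminator step: tell easy pixels (label 1) from hard pixels (label 0).
with torch.no_grad():
    p_easy = F.softmax(seg(easy_img), dim=1)
    p_hard = F.softmax(seg(hard_img), dim=1)
logits_easy, logits_hard = disc(p_easy), disc(p_hard)
d_loss = (F.binary_cross_entropy_with_logits(logits_easy, torch.ones_like(logits_easy))
          + F.binary_cross_entropy_with_logits(logits_hard, torch.zeros_like(logits_hard)))
opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
```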

    On the Impact of Flaky Tests in Automated Program Repair

    The literature on Automated Program Repair is largely dominated by approaches that leverage test suites not only to expose bugs but also to validate the generated patches. Unfortunately, beyond the widely discussed concern that test suites are an imperfect oracle because they can be incomplete, they can also include tests that are flaky. A flaky test is one that a program can pass or fail in a non-deterministic way. Such tests are generally carefully removed from repair benchmarks. In practice, however, flaky tests are present in the test suites of real software repositories. To the best of our knowledge, no study has discussed this threat to the validity of program repair evaluations. In this work, we highlight this threat and investigate the impact of flaky tests by reverting their removal from the Defects4J benchmark. Our study aims to characterize the impact of flaky tests on bug localization and their eventual influence on repair performance. Among other insights, we find that (1) although flaky tests make up only a small fraction (≈0.3%) of all tests, they affect experiments related to a large proportion (98.9%) of Defects4J real-world faults; (2) most flaky tests (98%) actually provide deterministic results under specific environment configurations (with the JDK version influencing the results); (3) flaky tests drastically hinder the effectiveness of spectrum-based fault localization (e.g., the rankings of 90 bugs degrade, while no bug obtains better localization results than those achieved without flaky tests); and (4) the repairability of APR tools is greatly affected by the presence of flaky tests (e.g., 10 state-of-the-art APR tools fix significantly fewer bugs than when the benchmark is manually curated to remove flaky tests). Given that the detection of flaky tests is still nascent, we call for the program repair community to relax the artificial assumption that the test suite is free of flaky tests. One direction we propose is to consider developing strategies where patches that partially fix bugs are considered worthwhile: a patch may make the program pass some test cases but fail others (which may actually be the flaky ones).
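    To make the fault-localization effect concrete, the sketch below (not the paper's tooling) applies the standard Ochiai formula to a toy coverage matrix and shows how the outcome of a single flaky test can flip the suspiciousness ranking between two runs; the statements and tests are invented for illustration.

```python
import math

def ochiai(ef, ep, total_failing):
    """Suspiciousness of a statement: ef/ep = failing/passing tests that cover it."""
    if total_failing == 0 or (ef + ep) == 0:
        return 0.0
    return ef / math.sqrt(total_failing * (ef + ep))

# The "spectrum": which tests execute which statements (invented example).
coverage = {
    "stmt_a": {"t2", "t_flaky"},
    "stmt_b": {"t2"},
}

def rank(failing, passing):
    scores = {
        stmt: ochiai(len(failing & cov), len(passing & cov), len(failing))
        for stmt, cov in coverage.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Run 1: the flaky test fails -> stmt_a is ranked most suspicious.
print(rank(failing={"t2", "t_flaky"}, passing={"t1"}))
# Run 2: the same flaky test passes -> stmt_b now outranks stmt_a.
print(rank(failing={"t2"}, passing={"t1", "t_flaky"}))
```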